ethnic minority
Machine Learning Applications in Studying Mental Health Among Immigrants and Racial and Ethnic Minorities: A Systematic Review
Park, Khushbu Khatri, Ahmed, Abdulaziz, Al-Garadi, Mohammed Ali
Background: The use of machine learning (ML) in mental health (MH) research is increasing, especially as new, more complex data types become available to analyze. By systematically examining the published literature, this review aims to uncover potential gaps in the current use of ML to study MH in vulnerable populations of immigrants, refugees, migrants, and racial and ethnic minorities. Methods: In this systematic review, we queried Google Scholar for ML-related terms, MH-related terms, and population-of-focus search terms strung together with Boolean operators. Backward reference searching was also conducted. Included peer-reviewed studies reported using a method or application of ML in an MH context and focused on the populations of interest. We did not impose date cutoffs. Publications were excluded if they were narrative or did not exclusively focus on a minority population from the respective country. Data on study context, focus of mental healthcare, sample, data type, type of ML algorithm used, and algorithm performance were extracted from each study. Results: Our search strategies returned 67,410 articles from Google Scholar. Ultimately, 12 were included. All were published within the last 6 years, and half studied populations within the US. Most reviewed studies used supervised learning to explain or predict MH outcomes. Some publications used up to 16 models to determine the best predictive power. Almost half of the included publications did not discuss their cross-validation method. Conclusions: The included studies, few as they are, provide proof of concept for the potential use of ML algorithms to address MH concerns in these special populations. Our systematic review finds that the clinical application of these models for classifying and predicting MH disorders is still under development.
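The abstract notes that almost half of the reviewed studies did not report their cross-validation method. As background on what that evaluation step involves, here is a minimal pure-Python sketch of k-fold cross-validation using a hypothetical majority-class baseline as the "model" (not drawn from any of the reviewed studies; real work would typically use a library such as scikit-learn):

```python
# Minimal k-fold cross-validation sketch: split the data into k folds,
# hold one fold out for testing at a time, fit on the rest, and average
# the held-out accuracy. The "model" here is a toy majority-class baseline.
from statistics import mean

def k_fold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(X, y, k=5):
    """Return mean held-out accuracy of a majority-class baseline."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for test_idx in folds:
        train_idx = [i for i in range(len(X)) if i not in test_idx]
        # "Fit": find the most common label in the training folds.
        train_labels = [y[i] for i in train_idx]
        majority = max(set(train_labels), key=train_labels.count)
        # "Score": accuracy of that constant prediction on the held-out fold.
        acc = mean(1.0 if y[i] == majority else 0.0 for i in test_idx)
        scores.append(acc)
    return mean(scores)
```

Reporting k and the splitting scheme matters because a single train/test split can overstate or understate performance; averaging over folds gives a more stable estimate, which is presumably why the review flags its absence.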
- Asia > Middle East > Republic of Türkiye (0.04)
- Europe > Germany (0.04)
- North America > United States > Tennessee > Davidson County > Nashville (0.04)
- (7 more...)
- Research Report > Experimental Study (1.00)
- Overview (1.00)
- Research Report > New Finding (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis (0.67)
Diverse Teams Are Needed to Save the Planet
Engineering has a white-male problem. Women make up just 14.5 percent of the engineering workforce in the United Kingdom, and ethnic minorities just 8 percent. For Lila Ibrahim, chief operating officer at DeepMind, and Hayaatun Sillem, CEO of the Royal Academy of Engineering, being women of color meant the odds were stacked against them in their industry. But for Sillem, the first woman and the first person from an ethnic minority to hold her position, coming from such a diverse background helped her "to build empathy into her life"--a trait she describes as a superpower. As for Ibrahim, the daughter of immigrants to the United States, she always felt like the "oddball" growing up in midwestern America.
- North America > United States (0.26)
- Europe > United Kingdom (0.26)
UK data watchdog investigates whether AI systems show racial bias
The UK data watchdog is to investigate whether artificial intelligence systems are showing racial bias when dealing with job applications. The Information Commissioner's Office said AI-driven discrimination could have "damaging consequences for people's lives" and lead to someone being rejected for a job or being wrongfully denied a bank loan or a welfare benefit. It will investigate the use of algorithms to sift through job applications, amid concerns that they are affecting employment opportunities for people from ethnic minorities. "We will be investigating concerns over the use of algorithms to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds," said the ICO. The investigation is being announced as part of a three-year plan for the ICO under the UK's new information commissioner, John Edwards, who joined the ICO in January after running its New Zealand counterpart.
- Oceania > New Zealand (0.25)
- North America > United States (0.05)
- Europe > United Kingdom (0.05)
- Law (0.31)
- Information Technology > Security & Privacy (0.31)
Is The Idea Of Digitization Being A Great Leveler A Myth?
The world has always been a lopsided, unfair mess--a statement that holds true regardless of whatever business sector you talk about or whichever country you visit. The rich, despite constituting less than 5% of the global population, always seem to wield an unfair influence over the rest--in a relative sense, the have-nots. Giant corporations trample over local businesses when they set up shop in a new country. Issues such as racism, sexism and unfair economic divide have been prevalent for what feels like an eternity. Technologies such as AI, computer vision and NLP were supposed to bridge this gap.
- Information Technology (0.99)
- Law > Civil Rights & Constitutional Law (0.35)
- Health & Medicine > Therapeutic Area (0.31)
China's SenseTime Postpones IPO After U.S. Blacklisting
SenseTime, whose products are used in smart cities, police surveillance and autonomous driving, had planned to raise as much as $767 million in an initial share sale on the Hong Kong stock exchange this month. On Friday, the U.S. Treasury Department named SenseTime among 25 individuals and entities that it said were connected to "human rights abuse and repression in several countries around the globe," and added it to a list of companies that it says are supporting China's military. U.S. authorities cited the use of SenseTime's facial-recognition technology in China's effort to suppress and assimilate mainly Muslim ethnic minorities in western China. The designation prevents Americans from investing in the company. On Saturday, SenseTime said the accusation was baseless and reflected "a fundamental misunderstanding of our company."
- North America > United States (1.00)
- Asia > China > Hong Kong (0.29)
- Asia > China > Beijing > Beijing (0.12)
- (3 more...)
- Government > Tax (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance (1.00)
US Treasury rolls out raft of sanctions on int'l Human Rights Day
The United States Treasury slapped sanctions on 25 individuals and entities on Friday, citing human rights abuses, and blacklisted a Chinese maker of artificial intelligence (AI) facial recognition software, citing its role in the repression of Muslim Uighurs and other religious and ethnic minorities in Xinjiang. In addition to China, Friday's raft of sanctions targeted people and entities linked to human rights abuses in Myanmar, North Korea and Bangladesh. Canada and the United Kingdom joined the US in announcing sanctions over repression in Myanmar. "On International Human Rights Day, Treasury is using its tools to expose and hold accountable perpetrators of serious human rights abuse," said Deputy Secretary of the Treasury Wally Adeyemo in a statement posted on the department's website. Treasury added AI firm SenseTime Group Limited to a list of Chinese blacklisted firms for developing facial recognition programmes "that can determine a target's ethnicity, with a particular focus on identifying ethnic Uyghurs".
- Asia > North Korea (0.62)
- Asia > Myanmar (0.50)
- Asia > Bangladesh (0.30)
- (4 more...)
Europe's artificial intelligence blindspot: Race
Europe's vision of artificial intelligence regulation is color-blind -- and not in a good way. Between the U.S.'s laissez-faire and China's dirigiste approaches, the EU is intent on carving out a "third way" for AI regulation that boosts innovation but respects "European values," including privacy and human rights. But activists and academics fear the rules will not consider the communities most at risk of AI-based discrimination -- people of color. In recent years, there have been high-profile examples of AI systems discriminating against racial minorities, including facial recognition systems that don't recognize women or black and brown faces; opaque, unenforceable and discriminatory hiring algorithms; or applications that predict disproportionate criminality and offer worse legal outcomes. The European Commission will unveil its AI rules this spring, requiring "high-risk" AI systems to meet minimum standards regarding trustworthiness.
- Asia > China (0.25)
- Europe > Netherlands (0.15)
- Law > Civil Rights & Constitutional Law (0.92)
- Government > Regional Government > Europe Government (0.36)
As China tracked Muslims, Alibaba showed customers how they could, too
As the Chinese government tracked and persecuted members of predominantly Muslim minority groups, technology giant Alibaba taught its corporate customers how they could play a part. Alibaba's website for its cloud computing business showed how clients could use its software to detect the faces of Uighurs and other ethnic minorities within images and videos, according to pages on the site that were discovered by the surveillance industry publication IPVM and shared with The New York Times. The feature was built into Alibaba software that helps web platforms monitor digital content for material related to terrorism, pornography and other red-flag categories, the website said. The Chinese government has swept hundreds of thousands of Uighurs and others into indoctrination camps as part of what it calls an anti-terrorism campaign. It has also rolled out a broad surveillance dragnet, using facial recognition and genetic testing, to monitor them.
- Information Technology > Cloud Computing (0.51)
- Information Technology > Communications (0.50)
- Information Technology > Artificial Intelligence (0.38)
It's too late to ban face recognition – here's what we need instead
Calls for an outright ban on face recognition technology are growing louder, but it is already too late. Given its widespread use by tech companies and the police, permanently rolling back the technology is impossible. It was widely reported this week that the European Commission is considering a temporary ban on the use of face recognition in public spaces. The proposed hiatus of up to five years, according to a white paper obtained by news site Politico, would aim to give politicians in Europe time to develop measures to mitigate the potential risks associated with the technology. Several US cities, including San Francisco, are mulling or have enacted similar bans.
- North America > United States > California > San Francisco County > San Francisco (0.26)
- Europe > United Kingdom (0.06)
- Asia > China > Xinjiang Uygur Autonomous Region (0.06)
China Uses AI to Flag Thousands of Uyghurs for Detention: Report
Leaked Chinese Communist Party (CCP) documents have revealed how China uses artificial intelligence to round up Uyghurs and other ethnic minorities for detention in Xinjiang's network of mass internment camps. The classified documents, made public by the International Consortium of Investigative Journalists (ICIJ) on Nov. 24, have also uncovered the repressive inner workings of the detention camps in the troubled western region, where at least one million people are believed to have been detained, according to figures cited by the U.S. Congressional-Executive Commission on China and the United Nations. In the second major leak in just days on the inner workings of the CCP in Xinjiang, the papers--the China Cables--reveal that "Chinese police are guided by a massive data collection and analysis system that uses artificial intelligence to select entire categories of Xinjiang residents for detention." In the space of just one week, the names of hundreds of thousands of Uyghurs and other ethnic minorities in the region were issued for arrest and interrogation using data collected by mass surveillance technology, according to the ICIJ report.
- Information Technology > Security & Privacy (0.90)
- Government > Regional Government > Asia Government > China Government (0.38)